Results 1 - 7 of 7
1.
Neuroimage ; 221: 117128, 2020 Nov 1.
Article in English | MEDLINE | ID: mdl-32673745

ABSTRACT

Cross-scanner and cross-protocol variability of diffusion magnetic resonance imaging (dMRI) data is a known major obstacle in multi-site clinical studies, since it limits the ability to aggregate dMRI data and derived measures. Computational algorithms that harmonize the data and minimize such variability are critical to reliably combine datasets acquired from different scanners and/or protocols, thus improving the statistical power and sensitivity of multi-site studies. Different computational approaches have been proposed to harmonize diffusion MRI data or remove scanner-specific differences. To date, these methods have mostly been developed for or evaluated on single b-value diffusion MRI data. In this work, we present the evaluation results of 19 algorithms developed to harmonize the cross-scanner and cross-protocol variability of multi-shell diffusion MRI using a benchmark database. The proposed algorithms rely on various signal representation approaches and computational tools, such as rotational invariant spherical harmonics, deep neural networks and hybrid biophysical and statistical approaches. The benchmark database consists of data acquired from the same subjects on two scanners with different maximum gradient strength (80 and 300 mT/m) and with two protocols. We evaluated the performance of these algorithms for mapping multi-shell diffusion MRI data across scanners and across protocols using several state-of-the-art imaging measures. The results show that data harmonization algorithms can reduce the cross-scanner and cross-protocol variability to a level similar to the scan-rescan variability obtained with the same scanner and protocol. In particular, the LinearRISH algorithm, based on adaptive linear mapping of rotational invariant spherical harmonic (RISH) features, yields the lowest variability for our data in predicting fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK) and the RISH features themselves. However, other algorithms, such as DIAMOND, SHResNet, DIQT and CMResNet, show further improvement in harmonizing the return-to-origin probability (RTOP). The performance of the different approaches provides useful guidelines for data harmonization in future multi-site studies.
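
For illustration, a minimal NumPy sketch of the linear RISH scaling idea: RISH energies are computed per spherical-harmonic order, and a subject's SH coefficients are rescaled by the square root of the ratio between reference- and target-scanner template RISH maps. The function names, array shapes and template inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rish_features(sh_coeffs, orders):
    """Rotation-invariant SH (RISH) energy per order: sum of squared
    coefficients belonging to that order, computed voxel-wise.

    sh_coeffs : (..., n_coeffs) SH coefficients of the dMRI signal
    orders    : (n_coeffs,) SH order l of each coefficient
    """
    return {l: np.sum(sh_coeffs[..., orders == l] ** 2, axis=-1)
            for l in np.unique(orders)}

def linear_rish_harmonize(sh_coeffs, orders, rish_ref, rish_tgt, eps=1e-8):
    """Scale each order's coefficients so the expected RISH energy on the
    target scanner matches the reference scanner.

    rish_ref, rish_tgt : dicts of template RISH maps (one per order l),
                         assumed to come from matched training subjects.
    """
    out = sh_coeffs.copy()
    for l in rish_ref:
        # square root because RISH energy is quadratic in the coefficients
        scale = np.sqrt(rish_ref[l] / (rish_tgt[l] + eps))
        out[..., orders == l] *= scale[..., None]
    return out
```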


Subjects
Algorithms , Brain/diagnostic imaging , Deep Learning , Diffusion Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Neuroimaging/methods , Adult , Diffusion Magnetic Resonance Imaging/instrumentation , Diffusion Magnetic Resonance Imaging/standards , Humans , Image Processing, Computer-Assisted/standards , Neuroimaging/instrumentation , Neuroimaging/standards , Regression Analysis
2.
Am J Hematol ; 95(8): 883-891, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32282969

ABSTRACT

Over 200 million malaria cases globally lead to half a million deaths annually. Accurate malaria diagnosis remains a challenge. Automated imaging processing approaches to analyze Thick Blood Films (TBF) could provide scalable solutions, for urban healthcare providers in the holoendemic malaria sub-Saharan region. Although several approaches have been attempted to identify malaria parasites in TBF, none have achieved negative and positive predictive performance suitable for clinical use in the west sub-Saharan region. While malaria parasite object detection remains an intermediary step in achieving automatic patient diagnosis, training state-of-the-art deep-learning object detectors requires the human-expert labor-intensive process of labeling a large dataset of digitized TBF. To overcome these challenges and to achieve a clinically usable system, we show a novel approach. It leverages routine clinical-microscopy labels from our quality-controlled malaria clinics, to train a Deep Malaria Convolutional Neural Network classifier (DeepMCNN) for automated malaria diagnosis. Our system also provides total Malaria Parasite (MP) and White Blood Cell (WBC) counts allowing parasitemia estimation in MP/µL, as recommended by the WHO. Prospective validation of the DeepMCNN achieves sensitivity/specificity of 0.92/0.90 against expert-level malaria diagnosis. Our approach PPV/NPV performance is of 0.92/0.90, which is clinically usable in our holoendemic settings in the densely populated metropolis of Ibadan. It is located within the most populous African country (Nigeria) and with one of the largest burdens of Plasmodium falciparum malaria. Our openly available method is of importance for strategies aimed to scale malaria diagnosis in urban regions where daily assessment of thousands of specimens is required.
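
As a concrete illustration of the WHO-style parasitemia estimate mentioned above, the sketch below converts MP and WBC counts into MP/µL by scaling with an assumed WBC density (conventionally 8,000 WBC/µL); the function name and default value are assumptions for illustration, not part of the published pipeline.

```python
def parasitemia_per_ul(mp_count, wbc_count, assumed_wbc_per_ul=8000):
    """Estimate parasite density (MP/µL) from MP and WBC counts on a
    thick blood film, following the WHO counting convention of scaling
    by an assumed WBC density (8,000 WBC/µL by default)."""
    if wbc_count <= 0:
        raise ValueError("WBC count must be positive")
    return mp_count * assumed_wbc_per_ul / wbc_count

# e.g. 120 parasites counted against 500 WBCs -> 1,920 MP/µL
density = parasitemia_per_ul(mp_count=120, wbc_count=500)
```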


Subjects
Malaria, Falciparum/blood , Malaria/diagnosis , Neural Networks, Computer , Humans , Malaria/blood
3.
IEEE Trans Pattern Anal Mach Intell ; 40(4): 834-848, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28463186

ABSTRACT

In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
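
A minimal PyTorch sketch of an atrous spatial pyramid pooling block, with dilation rates chosen for illustration; the published DeepLab head includes further details (per-class scoring, among others) omitted here, and the branch-fusion strategy shown (concatenation plus 1x1 convolution) is one simplified choice rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal atrous spatial pyramid pooling: parallel 3x3 convolutions
    with different dilation (atrous) rates probe the same feature map at
    several effective fields of view, then the branches are fused by a
    1x1 convolution."""

    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.project(torch.cat(feats, dim=1))

# e.g. applied to a DCNN feature map with 512 channels
aspp = ASPP(in_ch=512, out_ch=256)
out = aspp(torch.randn(1, 512, 64, 64))   # -> (1, 256, 64, 64)
```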

4.
Int J Comput Vis ; 118: 65-94, 2016.
Article in English | MEDLINE | ID: mdl-27471340

ABSTRACT

Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefits when transferring features from one domain to another.
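
A hedged sketch of the filter-bank idea: the convolutional layers of a pretrained network produce dense local descriptors that are then pooled in an orderless manner. Simple mean/std pooling stands in here for the Fisher-vector encoding used in the paper, and a recent torchvision weights API is assumed.

```python
import torch
import torchvision.models as models

# Convolutional layers of a pretrained network used as a dense filter bank
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

@torch.no_grad()
def orderless_descriptor(image):
    """Treat conv activations as local descriptors and pool them without
    spatial order (mean/std statistics here; the paper instead encodes
    the same local descriptors with Fisher vectors over a GMM)."""
    fmap = backbone(image)                       # (1, C, H, W)
    local = fmap.flatten(2).squeeze(0).T         # (H*W, C) local descriptors
    return torch.cat([local.mean(0), local.std(0)])  # orderless pooling

desc = orderless_descriptor(torch.randn(1, 3, 224, 224))
```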

5.
IEEE Trans Pattern Anal Mach Intell ; 35(7): 1744-56, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23682000

ABSTRACT

In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation, which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.
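
A toy tabular Q-learning routine, shown only to make the RL machinery concrete; the environment interface (reset/step/actions) is a hypothetical stand-in, not the authors' 1D facade-parsing MDP.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning on a generic episodic MDP interface:
    env.reset() -> state, env.step(a) -> (next_state, reward, done),
    env.actions(state) -> list of legal actions."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            # epsilon-greedy action selection
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda act: Q[(s, act)]))
            s2, r, done = env.step(a)
            # one-step temporal-difference target
            target = r if done else r + gamma * max(Q[(s2, b)] for b in env.actions(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```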

6.
IEEE Trans Pattern Anal Mach Intell ; 31(8): 1486-501, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542581

ABSTRACT

In this work, we formulate the interaction between image segmentation and object recognition in the framework of the Expectation-Maximization (EM) algorithm. We consider segmentation as the assignment of image observations to object hypotheses and phrase it as the E-step, while the M-step amounts to fitting the object models to the observations. These two tasks are performed iteratively, thereby simultaneously segmenting an image and reconstructing it in terms of objects. We model objects using Active Appearance Models (AAMs) as they capture both shape and appearance variation. During the E-step, the fidelity of the AAM predictions to the image is used to decide about assigning observations to the object. For this, we propose two top-down segmentation algorithms. The first starts with an oversegmentation of the image and then softly assigns image segments to objects, as in the common setting of EM. The second uses curve evolution to minimize a criterion derived from the variational interpretation of EM and introduces AAMs as shape priors. For the M-step, we derive AAM fitting equations that accommodate segmentation information, thereby allowing for the automated treatment of occlusions. Apart from top-down segmentation results, we provide systematic experiments on object detection that validate the merits of our joint segmentation and recognition approach.
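
A schematic Python sketch of the E/M alternation described above, using a hypothetical model interface (loglik/fit); simple Gaussian appearance models could stand in for the AAMs, whose fitting equations and occlusion handling are beyond this sketch.

```python
import numpy as np

def em_segment(obs, models, n_iters=20):
    """Schematic EM for joint segmentation and model fitting.
    obs    : (N, D) image observations
    models : list of object hypotheses with .loglik(obs) -> (N,) and
             .fit(obs, weights) methods (assumed interface)."""
    for _ in range(n_iters):
        # E-step: soft assignment of observations to object hypotheses
        L = np.stack([m.loglik(obs) for m in models], axis=1)   # (N, K)
        L -= L.max(axis=1, keepdims=True)                       # stabilize
        resp = np.exp(L)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: refit each object model to its weighted observations
        for k, m in enumerate(models):
            m.fit(obs, weights=resp[:, k])
    return resp  # responsibilities act as a soft segmentation
```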


Subjects
Algorithms , Artificial Intelligence , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Face/anatomy & histology , Humans , ROC Curve
7.
IEEE Trans Pattern Anal Mach Intell ; 31(1): 142-57, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19029552

ABSTRACT

In this work we approach the analysis and segmentation of natural textured images by combining ideas from image analysis and probabilistic modeling. We rely on AM-FM texture models and specifically on the Dominant Component Analysis (DCA) paradigm for feature extraction. This method provides a low-dimensional, dense and smooth descriptor, capturing essential aspects of texture, namely scale, orientation, and contrast. Our contributions are at three levels of the texture analysis and segmentation problems: First, at the feature extraction stage we propose a Regularized Demodulation Algorithm that provides more robust texture features and explore the merits of modifying the channel selection criterion of DCA. Second, we propose a probabilistic interpretation of DCA and Gabor filtering in general, in terms of Local Generative Models. Extending this point of view to edge detection facilitates the estimation of posterior probabilities for the edge and texture classes. Third, we propose the Weighted Curve Evolution scheme that enhances the Region Competition / Geodesic Active Regions methods by allowing for the locally adaptive fusion of heterogeneous cues. Our segmentation results are evaluated on the Berkeley Segmentation Benchmark, and compare favorably to current state-of-the-art methods.
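
A crude Python sketch of dominant-component-style feature extraction, using scikit-image's Gabor filter bank as a proxy: per pixel, the channel with the largest amplitude envelope supplies the contrast, scale and orientation features. The filter-bank parameters are illustrative, and the paper's Regularized Demodulation Algorithm is not reproduced here.

```python
import numpy as np
from skimage.filters import gabor

def dominant_component(image, frequencies=(0.1, 0.2, 0.3),
                       thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """DCA-style features: run a Gabor filter bank and, per pixel, keep
    the channel with the largest amplitude envelope, returning its
    amplitude (contrast), frequency (scale) and orientation."""
    amps, params = [], []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            amps.append(np.hypot(real, imag))    # amplitude envelope
            params.append((f, t))
    amps = np.stack(amps)                        # (n_channels, H, W)
    idx = amps.argmax(axis=0)                    # dominant channel per pixel
    amp = amps.max(axis=0)
    freq = np.asarray([p[0] for p in params])[idx]
    theta = np.asarray([p[1] for p in params])[idx]
    return amp, freq, theta
```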


Subjects
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Models, Statistical , Pattern Recognition, Automated/methods , Computer Simulation